The User-reported Critical Incident Method for Remote Usability Evaluation
Authors
Abstract
Much traditional user interface evaluation is conducted in usability laboratories, where a small number of selected users are directly observed by trained evaluators. However, as the network itself and the remote work setting have become intrinsic parts of usage patterns, evaluators often have limited access to representative users for usability evaluation in the laboratory, and the users' work context is difficult or impossible to reproduce in a laboratory setting. These barriers led to extending the concept of usability evaluation beyond the laboratory, typically using the network itself as a bridge to take interface evaluation to a broad range of users in their natural work settings. The over-arching goal of this work is to develop and evaluate a cost-effective remote usability evaluation method for real-world applications used by real users doing real tasks in real work environments. This thesis reports the development of such a method, and the results of a study to:
• investigate the feasibility and effectiveness of involving users to identify and report critical incidents in usage,
• investigate the feasibility and effectiveness of transforming remotely gathered critical incidents into usability problem descriptions, and
• gain insight into various parameters associated with the method.

DEDICATION
To my family, whose unconditional love and support inspire me and keep me reaching for higher stars. "¡Los quiero mucho!" ("I love you all very much!")

ACKNOWLEDGMENTS
I would like to thank my advisor and role model, Dr. H. Rex Hartson, for his guidance throughout my graduate work, and Rieky Keeris for her caring and effort in ensuring I always had enough "research fuel" to keep going with my work. Their sincerity, love, and friendship are invaluable, and I feel proud to be considered their "adopted son". Dr. Deborah Hix, leading member of the "thesis demolition team", kept my work focused, and I thank her for her guidance and devotion.
I would also like to thank the other members of my committee, Dr. Mary Beth Rosson and Dr. Robert C. Williges, who have been helpful and supportive and contributed greatly to this document. Many other people have contributed to the completion of this work, and I am happy to have an opportunity to thank them: John Kelso, who has worked on the remote evaluation project since we started, for being supportive and for being patient when things went wrong in the usability lab; Pawan Vora, for his help …
Similar Resources
The User-reported Critical Incident Method at a Glance
The over-arching goal of this work is to discuss the user-reported critical incident method, a cost-effective remote usability evaluation method for real-world applications involving real users doing real tasks in real work environments. Several methods have been developed for conducting usability evaluation without direct observation of a user by an evaluator. However, contrary to the user-rep...
Trusting Remote Users... Can They Identify Problems Without Involving Usability Experts?
Based on our belief that critical incident data, observed during usage and associated closely with specific task performance, are the most useful kind of formative evaluation data for finding and fixing usability problems, we developed a Remote Usability Evaluation Method (RUEM) that involves real users self-reporting critical incidents encountered in real tasks performed in their normal working...
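The data flow this entry describes, individual critical incidents reported by remote users during real tasks, later merged by evaluators into usability problem descriptions, can be sketched with a minimal record structure. This is an illustrative sketch only: the field names, severity scale, and grouping-by-task rule are assumptions for illustration, not the actual instrument or procedure used in the thesis.

```python
from dataclasses import dataclass

@dataclass
class CriticalIncident:
    """One user-reported critical incident, tied to a specific task (hypothetical schema)."""
    task: str         # task the user was performing when the incident occurred
    description: str  # what went wrong, in the user's own words
    severity: int     # user-rated severity, e.g. 1 (minor) .. 4 (task-blocking)

def to_problem_descriptions(incidents):
    """Group remotely gathered incidents by task and summarize each group
    into a candidate usability problem record (illustrative aggregation rule)."""
    problems = {}
    for inc in incidents:
        rec = problems.setdefault(inc.task, {"reports": 0, "max_severity": 0, "examples": []})
        rec["reports"] += 1
        rec["max_severity"] = max(rec["max_severity"], inc.severity)
        rec["examples"].append(inc.description)
    return problems

# Example: three incident reports collected remotely from two tasks.
reports = [
    CriticalIncident("save file", "Save button gave no feedback", 2),
    CriticalIncident("save file", "Lost edits after clicking Save", 4),
    CriticalIncident("search", "The 'no results' message was confusing", 1),
]
problems = to_problem_descriptions(reports)
print(problems["save file"]["reports"])  # prints 2
```

Grouping by task is one plausible first pass; in practice an evaluator would still review the grouped reports, since distinct incidents on the same task may reflect different underlying usability problems.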
Remote Usability Evaluation at a Glance
Much traditional user interface evaluation is conducted in usability laboratories, where a small number of selected users are directly observed by trained evaluators. However, as the network itself and the remote work setting have become intrinsic parts of usage patterns, evaluators often have limited access to representative users for usability evaluation in the laboratory and the users' work c...
Comparative Study of Synchronous Remote and Traditional In-Lab Usability Evaluation Methods
Traditional in-lab usability evaluation has been used as the 'standard' method for evaluating and improving the usability of software user interfaces (Andre, Williges, & Hartson, 2000). However, traditional in-lab evaluation has its drawbacks, such as limited availability of representative end users, the high cost of testing, and the lack of a true representation of a user's actual work environment. To coun...
Critical incidents and critical threads in empirical usability evaluation
Empirical usability evaluations (particularly “formative” evaluations, Scriven, 1967) hinge on observing and interpreting critical incidents (Flanagan, 1954) of use: the causes of such critical incidents can often be found in the immediate contexts of their occurrence and can guide specific design changes. However, it can also happen that the causes of a critical incident are temporally remote ...